
Don't activate metaslabs with weight 0 #8968

Merged
merged 2 commits into openzfs:master on Jul 5, 2019

Conversation

pcd1193182
Contributor

Signed-off-by: Paul Dagnelie [email protected]

Reviewed-by: Igor Kozhukhov [email protected]

Motivation and Context

During testing on DilOS, Igor noticed a consistent panic in the checkpoint tests: an assertion fired because a metaslab had an activation weight of 0. He shared the core file with me, and we discovered that the metaslab's weight was also 0. This means we had activated a metaslab with no space available, and there appear to be no safeguards against this. Once that happens, any attempt to passivate the metaslab results in an assertion failure.

Description

We now return ENOSPC from metaslab_activate() when the metaslab has weight 0, avoiding this issue. As a sanity check, we also assert that the metaslab's free range tree is empty in that case.
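In outline, the new guard in metaslab_activate() looks like this (a simplified sketch, not the exact diff; the rest of the function is elided):

	/*
	 * A metaslab with weight 0 has no usable space left, so an
	 * activation could never lead to a successful allocation.
	 * Fail with ENOSPC instead and, as a sanity check, assert
	 * that the metaslab's free range tree really is empty.
	 */
	if (msp->ms_weight == 0) {
		ASSERT0(range_tree_space(msp->ms_allocatable));
		return (SET_ERROR(ENOSPC));
	}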

How Has This Been Tested?

Tested on DilOS on the system that was consistently failing; test runs since then have passed.

Types of changes

  • [x] Bug fix (non-breaking change which fixes an issue)
  • [ ] New feature (non-breaking change which adds functionality)
  • [ ] Performance enhancement (non-breaking change which improves efficiency)
  • [ ] Code cleanup (non-breaking change which makes code smaller or more readable)
  • [ ] Breaking change (fix or feature that would cause existing functionality to change)
  • [ ] Documentation (a change to man pages or other documentation)

@pcd1193182 pcd1193182 requested review from ahrens and sdimitro June 28, 2019 21:08
@pcd1193182 pcd1193182 added the "Status: Code Review Needed" (Ready for review and testing) label Jun 28, 2019
module/zfs/metaslab.c (review comment, outdated, resolved)

codecov bot commented Jun 29, 2019

Codecov Report

Merging #8968 into master will decrease coverage by 0.06%.
The diff coverage is 33.33%.

@@            Coverage Diff             @@
##           master    #8968      +/-   ##
==========================================
- Coverage   78.74%   78.67%   -0.07%     
==========================================
  Files         388      388              
  Lines      119972   119975       +3     
==========================================
- Hits        94466    94391      -75     
- Misses      25506    25584      +78
Flag Coverage Δ
#kernel 79.45% <33.33%> (-0.06%) ⬇️
#user 66.5% <33.33%> (-0.19%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@sdimitro
Contributor

sdimitro commented Jul 1, 2019

Hey @pcd1193182, we've actually seen this before internally within Delphix (see DLPX-61947 and the relevant fix). It may be preferable to upstream the existing fix from DelphixOS, for the following reasons:
[1] It makes metaslab_should_allocate() more correct and cleaner. That function also seems semantically more appropriate for this check (i.e. "do we have enough space for the allocation?") than metaslab_activate(), which should return errors for problems with allocators and their activation flags.
[2] By making the fix in metaslab_should_allocate(), we catch the issue sooner, because the check happens earlier in the metaslab_group_alloc_normal() codepath: in metaslab_should_allocate() as called by find_valid_metaslab(), near the beginning of metaslab_group_alloc_normal(), instead of later, where metaslab_activate() is called.
[3] The fix has soaked a bit longer (at least through our internal test runs).

If you agree, I can open a PR to upstream that change.
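For context, the shape of that check would be roughly as follows (a sketch of the idea only; the actual DelphixOS change may differ):

	static boolean_t
	metaslab_should_allocate(metaslab_t *msp, uint64_t asize)
	{
		/*
		 * A metaslab whose weight is 0 has no usable space and can
		 * never satisfy an allocation, so reject it up front before
		 * we ever try to activate it.
		 */
		if (msp->ms_weight == 0)
			return (B_FALSE);
		...
	}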

@sdimitro left a comment
Contributor

See my comment above.

@pcd1193182
Contributor Author

@sdimitro That change wouldn't fix this bug. The second call to metaslab_should_allocate() doesn't happen until after we've already activated the metaslab, at which point it is too late; we will attempt to passivate it when we realize we shouldn't allocate. The first call to metaslab_should_allocate() could have happened while the metaslab was loaded, so its behavior wouldn't have changed; the metaslab did have enough space available when we picked it, but no longer did by the time we activated it.
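To illustrate the ordering (heavily simplified from metaslab_group_alloc_normal(); a sketch of the control flow, not the literal code):

	/* 1. find_valid_metaslab() calls metaslab_should_allocate();
	 *    the metaslab still has space here, so the check passes. */
	msp = find_valid_metaslab(mg, activation_weight, ...);

	/* 2. Before we take ms_lock, another thread can allocate the
	 *    remaining space, dropping the metaslab's weight to 0. */
	mutex_enter(&msp->ms_lock);

	/* 3. Without this fix, activation succeeds even though the
	 *    weight is now 0. */
	int activation_error =
	    metaslab_activate(msp, allocator, activation_weight);

	/* 4. Only after activation does metaslab_should_allocate() run
	 *    again; it fails, and the subsequent passivation trips the
	 *    activation-weight assertion. */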

@sdimitro left a comment
Contributor

Change looks good to me; one small nit.

In metaslab_group_alloc_normal() there exists this comment:

		int activation_error =
		    metaslab_activate(msp, allocator, activation_weight);
		metaslab_active_mask_verify(msp);

		/*
		 * If the metaslab was activated by another thread for
		 * another allocator or activation_weight (EBUSY), or it
		 * failed because another metaslab was assigned as primary
		 * for this allocator (EEXIST) we continue using this
		 * metaslab for our allocation, rather than going on to a
		 * worse metaslab (we waited for that metaslab to be loaded
		 * after all).
		 *
		 * If the activation failed due to an I/O error we skip to
		 * the next metaslab.
		 */
		boolean_t activated;
		if (activation_error == 0) {
			activated = B_TRUE;
		} else if (activation_error == EBUSY ||
		    activation_error == EEXIST) {
			activated = B_FALSE;
		} else {
			mutex_exit(&msp->ms_lock);
			continue;
		}

We may want to add the ENOSPC case together with the EIO one, e.g.:

                  ... <cropped> ...
		 * If the activation failed due to an ENOSPC or I/O error we skip
		 * to the next metaslab.
		 */
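Note that no new branch is needed in the code itself: ENOSPC is neither 0, EBUSY, nor EEXIST, so it already falls through to the final else and we skip to the next metaslab; only the comment needs updating.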

@pcd1193182
Contributor Author

@sdimitro Agreed and done.

@behlendorf
Contributor

> Agreed and done.

@pcd1193182 when you get a chance, can you push the updated version?

@behlendorf behlendorf added the "Status: Accepted" (Ready to integrate: reviewed, tested) label and removed the "Status: Code Review Needed" (Ready for review and testing) label Jul 3, 2019
@behlendorf behlendorf merged commit fe0ea84 into openzfs:master Jul 5, 2019
TulsiJain pushed a commit to TulsiJain/zfs that referenced this pull request Jul 20, 2019
We return ENOSPC in metaslab_activate if the metaslab has weight 0, 
to avoid activating a metaslab with no space available.  For sanity 
checking, we also assert that there is no free space in the range 
tree in that case.

Reviewed-by: Igor Kozhukhov <[email protected]>
Reviewed-by: Matt Ahrens <[email protected]>
Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Paul Dagnelie <[email protected]>
Closes openzfs#8968
tonyhutter pushed a commit to tonyhutter/zfs that referenced this pull request Dec 24, 2019
tonyhutter pushed a commit to tonyhutter/zfs that referenced this pull request Dec 27, 2019
tonyhutter pushed a commit that referenced this pull request Jan 23, 2020
allanjude pushed a commit to KlaraSystems/zfs that referenced this pull request Apr 28, 2020